
Fine-grained Optimization of Deep Neural Networks

Neural Information Processing Systems

In recent studies, several asymptotic upper bounds on the generalization errors of deep neural networks (DNNs) have been theoretically derived. These bounds are functions of several norms of the weights of the DNNs, such as the Frobenius and spectral norms, and they are computed for weights grouped according to either the input or output channels of the DNNs. In this work, we conjecture that if we can impose multiple constraints on the weights of DNNs to upper bound their norms, and train the DNNs with these weights, then we can attain empirical generalization errors closer to the derived theoretical bounds and improve the accuracy of the DNNs. To this end, we pose two problems. First, we aim to obtain weights whose different norms are all upper bounded by a constant.
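As a rough illustration of the first problem (this is a sketch, not the paper's actual algorithm), one way to obtain a weight matrix whose spectral and Frobenius norms are both upper bounded by a single constant `c` is to clip the singular values and then rescale:

```python
import numpy as np

def project_norms(W, c=1.0):
    """Return a matrix whose spectral and Frobenius norms are both <= c.

    Illustrative projection only; the paper's constrained training
    procedure is not reproduced here.
    """
    # Clip singular values so the spectral norm (largest singular value)
    # is at most c.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = np.minimum(s, c)
    W = U @ (s[:, None] * Vt)
    # Rescale if the Frobenius norm still exceeds c; rescaling by a
    # factor <= 1 can only shrink the spectral norm further.
    fro = np.linalg.norm(W)
    if fro > c:
        W = W * (c / fro)
    return W
```

Such a projection could, for instance, be applied to the weights after each gradient step to keep both norms bounded throughout training.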


Reviews: Fine-grained Optimization of Deep Neural Networks

Neural Information Processing Systems

It is difficult to parse when new notation is introduced right before it is used, or not at all.
- Subsections in Section 3: e.g., "In order to ..." should be a new section
- The two results should be listed as sub-headings
- How these results are incorporated into the algorithm should be listed immediately after
Section 4:
- Compress Table 1 & 2 captions
- Notation should be described separately and not merged into the bullets
- The derivation of Lines 5, 6, and 7 should be explained in greater detail


Reviews: Fine-grained Optimization of Deep Neural Networks

Neural Information Processing Systems

The reviewers were generally impressed by the quality of the technical contributions, but had concerns about the clarity of the work. In particular, there was a shared sense that the paper was very densely written and difficult to understand. The authors promise many revisions, which the reviewers are taking in good faith. Reviewer 1 has left very detailed comments about how to revise the paper to improve clarity. Please take these very seriously and undertake revisions with these points in mind for the final version.



Fine-grained Optimization of Deep Neural Networks

Ozay, Mete

Neural Information Processing Systems
